Sink location algorithm of power domain nonorthogonal multiple access for real-time industrial internet of things
SUN Yuan, SHEN Wenjian, NI Pengbo, MAO Min, XIE Yaqi, XU Chaonong
Journal of Computer Applications    2023, 43 (1): 209-214.   DOI: 10.11772/j.issn.1001-9081.2021111946
Aiming at the large access delay in the industrial Internet of Things (IoT), a sink location algorithm based on Power Domain Non-Orthogonal Multiple Access (PD-NOMA) was proposed for real-time industrial IoT. In this algorithm, the location of the sink was treated as the optimization variable, and the access delay was minimized by exploiting power-domain multiplexing among users as far as possible. Firstly, for any two users, it was proven that the region in which a sink can decode both of their parallel transmissions must be a circle; by enumerating all pairs of users, the set of decodable areas of the sink was obtained, and every minimal intersection of this area set must be a convex region, so the optimal sink location must lie in one of these minimal intersection areas. Secondly, for each minimal intersection area in which the sink could be deployed, the minimum number of chains partitioning the network generation graph was computed and used as the metric of access delay. Finally, the optimal sink location was determined by comparing these minimum chain partition numbers. Experimental results show that when the decoding threshold is 2 and the number of users is 30, the average access delay of the proposed algorithm is about 36.7% of that of classic time division multiple access; moreover, the delay decreases almost linearly as the decoding threshold decreases and as the channel decay factor increases. The proposed algorithm can serve as a reference, from the access-layer perspective, for massive ultra-reliable low-latency communications.
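The chain-partition metric above can be made concrete: by Dilworth's theorem, the minimum number of chains covering a DAG equals the number of nodes minus the size of a maximum matching in a bipartite graph built on the DAG's transitive closure. A minimal Python sketch of this computation (the construction of the network generation graph from the decoding constraints is not reproduced here):

```python
def min_chain_partition(n, edges):
    """Minimum number of chains covering the n nodes of a DAG (Dilworth):
    n minus a maximum bipartite matching on the transitive closure."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    def reachable(start):
        # iterative DFS collecting every node reachable from `start`
        stack, seen = [start], set()
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    reach = [reachable(u) for u in range(n)]  # transitive closure
    match_right = [-1] * n                    # right node -> matched left node

    def try_augment(u, visited):
        # classic augmenting-path step of Hungarian bipartite matching
        for v in reach[u]:
            if v not in visited:
                visited.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matching = sum(try_augment(u, set()) for u in range(n))
    return n - matching
```

For example, a 4-node graph consisting of the chain 0→1→2 plus an isolated node needs two chains, while the full chain 0→1→2→3 needs one.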
Event description generation based on generative adversarial network
SUN Heli, SUN Yuzhu, ZHANG Xiaoyun
Journal of Computer Applications    2021, 41 (5): 1256-1261.   DOI: 10.11772/j.issn.1001-9081.2020081242
In Event-Based Social Networks (EBSNs), automatically generating descriptions of social events helps organizers avoid poorly written, overly long or inaccurate descriptions, and makes it easier to produce rich, accurate and attractive ones. In order to automatically generate text sufficiently similar to real event descriptions, a Generative Adversarial Network (GAN) model named GAN_PG was proposed. In GAN_PG, a Variational Auto-Encoder (VAE) was used as the generator, and a neural network with Gated Recurrent Units (GRU) was used as the discriminator. During training, drawing on the Policy Gradient (PG) method from reinforcement learning, a reasonable reward function was designed to train the generator. Experimental results show that the BLEU-4 score of the event descriptions generated by GAN_PG reached 0.67, demonstrating that GAN_PG can generate event descriptions sufficiently similar to natural language in an unsupervised way.
Social event participation prediction based on event description
SUN Heli, SUN Yuzhu, ZHANG Xiaoyun
Journal of Computer Applications    2020, 40 (11): 3101-3106.   DOI: 10.11772/j.issn.1001-9081.2020030418
In research on Event-Based Social Networks (EBSNs), predicting participation in social events from their textual descriptions is difficult: related studies are very limited, and the difficulty mainly comes from the subjectivity of evaluating event descriptions and the limitations of language modeling algorithms. To address these problems, the concepts of successful event, similar event and event similarity were first defined, and social data collected from the Meetup platform was extracted accordingly. Analysis and prediction methods based on Lasso regression, Convolutional Neural Network (CNN) and Gated Recurrent Neural Network (GRNN) were then designed. In the experiments, part of the extracted data was used to train the three models, and the remaining data was used for analysis and prediction. The results show that, compared with events whose descriptions were not used, the prediction accuracy with the Lasso regression model improved by 2.35% to 3.8% across different classifiers, and with the GRNN model by 4.5% to 8.9%, while the CNN model did not yield satisfactory results. This study shows that event descriptions can improve the prediction of event participation, and that the GRNN model achieves the highest prediction accuracy among the three models.
Text-to-image synthesis method based on multi-level structure generative adversarial networks
SUN Yu, LI Linyan, YE Zihan, HU Fuyuan, XI Xuefeng
Journal of Computer Applications    2019, 39 (11): 3204-3209.   DOI: 10.11772/j.issn.1001-9081.2019051077
In recent years, Generative Adversarial Networks (GAN) have achieved remarkable success in text-to-image synthesis, but problems remain such as blurred image edges, unclear local textures and small sample variance. To address these shortcomings, a Multi-Level structure Generative Adversarial Network (MLGAN) model, composed of multiple generators and discriminators arranged hierarchically, was proposed on the basis of the Stack Generative Adversarial Network model (StackGAN++). Firstly, a hierarchical structure coding method and word-vector constraints were introduced to modify the condition vector of the generator at each level, making the edge details and local textures of the image clearer and more vivid. Then, the generators and discriminators were jointly trained so that the multi-level distributions of generated images approximated the real image distribution, which enlarged the variance of the generated samples and increased their diversity. Finally, generators at different levels produced images of the corresponding text at different scales. Experimental results show that the Inception Scores of the MLGAN model reached 4.22 and 3.88 on the CUB and Oxford-102 datasets respectively, which are 4.45% and 3.74% higher than those of StackGAN++. The MLGAN model alleviates the edge blurring and unclear local textures of generated images, and its generated images are closer to real images.
Task allocation mechanism for crowdsourcing system based on reliability of users
SHI Zhan, XIN Yu, SUN Yu'e, HUANG He
Journal of Computer Applications    2017, 37 (9): 2449-2453.   DOI: 10.11772/j.issn.1001-9081.2017.09.2449
Considering the shortcomings of existing research on user reliability in crowdsourcing systems, it was assumed that each user has a different reliability for each type of task, and on this basis a task allocation mechanism based on user reliability was designed. Firstly, an efficient task allocation mechanism was designed with a greedy strategy to maximize the profit of task publishers, choosing the allocation with the maximum benefit at each step. Secondly, a reliability updating mechanism based on historical information was designed, in which a user's reliability is determined by historical reliability and the quality of the current task, and the final payment to the user is linked to this reliability, motivating users to keep completing tasks with high quality. Finally, the effectiveness of the designed mechanisms was analyzed in three respects: the total profit of task publishers, the task completion rate, and user reliability. The simulation results show that, compared with ProMoT (Profit Maximizing Truthful auction mechanism), the proposed method is more effective and feasible, with the total profit of task publishers 16% higher. It also solves the user-unreliability problem of existing methods, and increases both the reliability of crowdsourcing systems and the total revenue of task publishers.
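The greedy allocation and the reliability update can be sketched as follows. The task/user structures, the profit formula `value * reliability - price`, and the smoothing factor `alpha` are illustrative assumptions, not the paper's exact definitions:

```python
def allocate_greedy(tasks, users, reliability):
    """Repeatedly pick the (task, user) pair with the largest expected
    publisher profit: the task's value scaled by the user's reliability
    for that task type, minus the user's asking price."""
    assignment, used_users = {}, set()
    remaining = set(tasks)
    while remaining:
        best = None
        for t in remaining:
            for u in users:
                if u in used_users:
                    continue
                profit = (tasks[t]["value"]
                          * reliability[(u, tasks[t]["type"])] - users[u])
                # keep only profitable assignments, best-first
                if profit > 0 and (best is None or profit > best[0]):
                    best = (profit, t, u)
        if best is None:
            break
        _, t, u = best
        assignment[t] = u
        used_users.add(u)
        remaining.remove(t)
    return assignment

def update_reliability(old, quality, alpha=0.7):
    """Blend historical reliability with the quality of the current task."""
    return alpha * old + (1 - alpha) * quality
```

A user who delivers high quality thus sees reliability (and hence payment) rise over successive tasks, which is the incentive the mechanism relies on.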
Pencil drawing rendering based on textures and sketches
SUN Yuhong, ZHANG Yuanke, MENG Jing, HAN Lijuan
Journal of Computer Applications    2016, 36 (7): 1976-1980.   DOI: 10.11772/j.issn.1001-9081.2016.07.1976
Concerning the problems in pencil drawing generation that pencil lines lack flexibility and textures lack direction, a method combining directional textures and pencil sketches was proposed to produce pencil drawings from natural images. First, histogram matching was employed to obtain the tone map of the image, and the image was segmented into several regions according to color. For each region, a tone and a direction were computed from its color and shape to decide the final tone and direction in the pencil drawing. Then, an adjusted linear convolution was used to obtain pencil sketches with a degree of randomness. Finally, the directional textures and the sketches were combined to produce the pencil drawing style. Various kinds of natural images can be converted to pencil drawings by the proposed method, and the renderings were compared with those of existing methods, including line integral convolution and a tone-based method. The experimental results demonstrate that the directional texture simulates manual pencil texture better, and that the adjusted sketches mimic the randomness and flexibility of hand-drawn pencil strokes.
Mesh simplification algorithm combined with edge collapse and local optimization
LIU Jun, FAN Hao, SUN Yu, LU Xiangyan, LIU Yan
Journal of Computer Applications    2016, 36 (2): 535-540.   DOI: 10.11772/j.issn.1001-9081.2016.02.0535
Aiming at the problems that detail features are lost and mesh quality is poor when three-dimensional models are simplified to low resolution by mesh simplification algorithms, a high-quality, feature-preserving mesh simplification algorithm was proposed. By introducing the approximate curvature of a vertex and combining it with the error matrix of edge collapse, the detail features of the simplified model were preserved to a great extent. At the same time, by analyzing the quality of the simplified triangular mesh, optimizing it locally and reducing the number of narrow triangles, the quality of the simplified model was improved. The proposed algorithm was tested on the Apple and Horse models and compared with two algorithms: a classical edge-collapse-based mesh simplification algorithm and an improved version of it. The experimental results show that when the models are simplified to low resolution, the triangular meshes of the two compared algorithms are too evenly distributed and their local details are unclear, while the meshes of the proposed algorithm are dense in areas of large curvature and sparse in flat areas, with legible local details. The geometric errors of the proposed algorithm are of the same order of magnitude as those of the two compared algorithms, while its average mesh quality is much higher. The results verify that the proposed algorithm not only preserves the detail features of the original model efficiently, but also yields simplified models of high quality and better appearance.
FP-MFIA: improved algorithm for mining maximum frequent itemsets based on frequent-pattern tree
YANG Pengkun, PENG Hui, ZHOU Xiaofeng, SUN Yuqing
Journal of Computer Applications    2015, 35 (3): 775-778.   DOI: 10.11772/j.issn.1001-9081.2015.03.775

Focusing on the drawback that the Discovering Maximum Frequent Itemsets Algorithm (DMFIA) has to generate many maximal frequent candidate itemsets in each dimension when datasets have many candidate items and the maximal frequent itemsets are short, an improved algorithm for mining maximal frequent itemsets based on the Frequent-Pattern tree (FP-MFIA) was proposed. Based on the header table of the FP-tree, the algorithm mines maximal frequent itemsets with a bottom-up search, accelerating the counting of candidates. By producing lower-dimensional infrequent itemsets from the conditional pattern base of each layer during mining, and by pruning and reducing the dimension of candidate itemsets, the number of candidate itemsets is reduced greatly; at the same time, making full use of the properties of maximal frequent itemsets reduces the search space. Comparisons of computation time under different supports show that FP-MFIA is at least twice as fast as DMFIA and BDRFI (an algorithm for mining frequent itemsets based on dimensionality reduction of frequent itemsets), which shows that FP-MFIA has a clear advantage when candidate itemsets are high-dimensional.
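A maximal frequent itemset is a frequent itemset none of whose supersets is frequent. The following sketch illustrates the notion with a brute-force levelwise search rather than the paper's FP-tree machinery:

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, minsup):
    """Enumerate frequent itemsets levelwise (Apriori-style brute force,
    not FP-MFIA), then keep only those with no frequent superset."""
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        # number of transactions containing every item of `itemset`
        return sum(1 for t in transactions if itemset <= t)

    frequent = []
    for k in range(1, len(items) + 1):
        level = [frozenset(c) for c in combinations(items, k)
                 if support(frozenset(c)) >= minsup]
        if not level:
            break  # no frequent itemset of size k => none larger either
        frequent.extend(level)

    # maximal = frequent itemsets that are not a proper subset of another
    return {f for f in frequent if not any(f < g for g in frequent)}
```

On the toy database {a,b}, {a,b,c}, {a,c} with minimum support 2, the frequent itemsets are a, b, c, ab and ac, of which only ab and ac are maximal.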

Novel quantum differential evolutionary algorithm for blocking flowshop scheduling
QI Xuemei, WANG Hongtao, CHEN Fulong, TANG Qimei, SUN Yunxiang
Journal of Computer Applications    2015, 35 (3): 663-667.   DOI: 10.11772/j.issn.1001-9081.2015.03.663

A Novel Quantum Differential Evolution (NQDE) algorithm was proposed for the Blocking Flowshop Scheduling Problem (BFSP) with the objective of minimizing the makespan. NQDE combines the Quantum Evolutionary Algorithm (QEA) with Differential Evolution (DE), and a novel quantum rotation gate was designed to control the evolutionary trend and increase population diversity. An effective co-evolutionary strategy combining the quantum-inspired evolutionary algorithm with Variable Neighborhood Search (QEA-VNS) was also developed to enhance the global search ability of the algorithm and further improve solution quality. The proposed algorithm was tested on Taillard's benchmark instances. The results show that NQDE obtains evidently more optimal solutions than the current best heuristic, the Improved Nawaz-Enscore-Ham heuristic (INEH); specifically, NQDE improves the solutions of 64 of the 110 instances. Moreover, NQDE outperforms the valid meta-heuristics New Modified Shuffled Frog Leaping Algorithm (NMSFLA) and Hybrid Quantum Differential Evolution (HQDE), with an Average Relative Percentage Deviation (ARPD) about 6% lower than theirs. It is thus shown that NQDE is suitable for large-scale BFSP.

Distributed clustering algorithm with high communication efficiency for streaming data
ZHU Qiang SUN Yuqiang
Journal of Computer Applications    2014, 34 (9): 2505-2509.   DOI: 10.11772/j.issn.1001-9081.2014.09.2505

The resources of sensor nodes are limited, and high communication overhead consumes much power. In order to reduce the communication overhead of distributed streaming-data clustering, a new efficient algorithm with two phases, online local clustering and offline coordinated clustering, was proposed. The online phase clusters the data at each remote stream source and sends the results to the coordinator node in serialized form; the coordinator then collects and analyzes all local clusters to obtain the global clusters. The experimental results show that the time for sending data is constant, while the clustering time and the total time grow linearly with the size of the sliding window, so the execution time of the algorithm is not affected by the number of clusters. The accuracy of the proposed algorithm is close to that of the centralized algorithm, and its communication overhead is far less than that of the compared distributed algorithm. The experimental results show that the proposed algorithm has good scalability and can be applied to clustering analysis of distributed large-scale streaming data.
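The two-phase idea can be sketched with 1-D k-means: each remote node ships only (centroid, count) summaries, and the coordinator clusters those weighted centroids. This is an illustrative reduction, not the paper's exact algorithm:

```python
def kmeans1d(points, weights, k, iters=20):
    """Weighted 1-D k-means with a deterministic spread initialization."""
    uniq = sorted(set(points))
    k = min(k, len(uniq))
    centers = [uniq[i * (len(uniq) - 1) // max(1, k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p, w in zip(points, weights):
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append((p, w))
        centers = [sum(p * w for p, w in g) / sum(w for _, w in g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted((c, sum(w for _, w in g)) for c, g in zip(centers, groups))

def local_cluster(points, k):
    # online phase, run on each remote data source:
    # cluster locally, ship only (centroid, count) summaries
    return kmeans1d(points, [1] * len(points), k)

def global_cluster(summaries, k):
    # offline phase, run on the coordinator:
    # cluster the weighted local centroids into global clusters
    pts, wts = zip(*summaries)
    return kmeans1d(list(pts), list(wts), k)
```

The communication cost per node is O(k) regardless of how many points the node has seen, which is the point of the two-phase design.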

Smoothening in surface blending of quadric algebraic surfaces
LI Yaohui XUAN Zhaocheng WU Zhifeng SUN Yuan
Journal of Computer Applications    2014, 34 (7): 2054-2057.   DOI: 10.11772/j.issn.1001-9081.2014.07.2054

To solve the problem of discontinuity when blending two quadric surfaces with coplanar perpendicular axes, how to improve the blending-surface equations so as to obtain a smooth and continuous blending surface was discussed. First, the cause of the discontinuity was analyzed: the terms in one variable vanish when the other variables take certain specified values, so the blending equation becomes independent of that variable at those values, which means the blending surface is disconnected there. On the basis of this analysis, a method that guarantees a continuous blending surface was presented. In addition, how to smooth the blending surface once a continuous one has been computed was discussed. For a G0 blending surface, the polynomial of the auxiliary surface is taken as a factor, multiplied by a degree-one function f′, and the result is added to the primary surface f_i; the smoothness of the blending surface can then be adjusted by changing the coefficients of f′. For a Gn blending surface, a compensating polynomial of degree at most 2 is added directly to the primary blending equation when computing the blending surface. This method smooths the blending surface without increasing the degree of the G0 blending surface.

Active congestion control strategy based on historical probability in delay tolerant networks
SHEN Jian XIA Jingbo FU Kai SUN Yu
Journal of Computer Applications    2014, 34 (3): 644-648.   DOI: 10.11772/j.issn.1001-9081.2014.03.0644

To solve the congestion problem at nodes in delay tolerant networks, an active congestion control strategy based on historical probability was proposed. The strategy introduces the concept of a referenced probability that is adjusted dynamically according to the degree of congestion; the referenced probability governs the forwarding conditions so as to avoid and control congestion at the node, while also improving the utilization of idle resources and the transmission efficiency of the network. The simulation results show that the strategy raises the delivery ratio of the entire network and reduces the load ratio and the message loss rate; as a result, active congestion control is realized and the transmission performance of the network is enhanced.
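A toy version of the referenced-probability idea: the forwarding threshold rises with buffer occupancy, so a congested node accepts fewer messages. The linear form and the constants `base` and `k` are invented for illustration; the paper's actual adjustment rule is not reproduced here:

```python
def referenced_probability(base, buffer_used, buffer_size, k=0.5):
    """Raise the forwarding threshold as the node's buffer fills up."""
    return min(1.0, base + k * buffer_used / buffer_size)

def should_forward(delivery_predictability, base, buffer_used, buffer_size):
    """Forward a message only if its delivery predictability clears
    the (congestion-adjusted) referenced probability."""
    return delivery_predictability >= referenced_probability(
        base, buffer_used, buffer_size)
```

With an empty buffer the node forwards freely; as the buffer approaches capacity, only messages with high delivery predictability get through, which is the active-control behavior described above.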

Self-adaptive microblog hot topic tracking method using term correlation
SUN Yuexin MA Huifang SHI Yakai CUI Tong
Journal of Computer Applications    2014, 34 (12): 3497-3501.  

Aiming at the deficiency of traditional text representation models, which usually ignore term correlation, and at the topic-drift problem during topic tracking, a self-adaptive microblog hot-topic tracking method using term correlation was proposed. The mutual information between terms within the same microblog and across different microblogs was investigated, and the conventional text representation model was updated accordingly. Similarity calculation was then performed to decide whether a microblog is a subsequent discussion of a given hot topic. Finally, the vectors of the microblogs were updated to avoid topic drift. Experiments show the effectiveness of the proposed method.
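Term correlation of the kind described is commonly measured by pointwise mutual information over document (co-)occurrence counts; a minimal sketch (the paper's exact weighting scheme is not reproduced):

```python
import math

def pmi(term_a, term_b, docs):
    """Pointwise mutual information of two terms over a set of microblogs
    (each given as a set of terms): log p(a,b) / (p(a) * p(b))."""
    n = len(docs)
    pa = sum(term_a in d for d in docs) / n
    pb = sum(term_b in d for d in docs) / n
    pab = sum(term_a in d and term_b in d for d in docs) / n
    if pab == 0:
        return float("-inf")  # never co-occur
    return math.log(pab / (pa * pb))
```

Positive PMI indicates the terms co-occur more often than independence would predict, and such pairs are the ones worth folding back into the text representation model.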

Improved MPEG-2 video coding scheme based on compressed sensing
DUAN Ji-zhong ZHANG Li-yi LIU Yu SUN Yun-shan
Journal of Computer Applications    2012, 32 (12): 3411-3414.   DOI: 10.3724/SP.J.1087.2012.03411
In order to explore applications of Compressed Sensing (CS) in video coding and to improve the coding efficiency of MPEG-2, an improved scheme based on CS and MPEG-2 was proposed. Exploiting the fact that the original image has a sparser gradient than the residual image, the scheme chooses, between the standard reconstruction method and the Total Variation (TV) minimization algorithm in the pixel domain, the method that produces an image with the smaller Sum of Squared Differences (SSD) as the final reconstruction. The experimental results show that the proposed scheme is efficient for all kinds of video sequences: the improvement in Peak Signal-to-Noise Ratio (PSNR) is greater than 0.5 dB for sequences with sharp edges, and 0.26 dB to 0.41 dB for sequences with smooth areas or complex textures.
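The mode decision described, picking whichever candidate reconstruction has the smaller SSD against the reference, reduces to a simple comparison. An illustrative sketch with the candidate reconstructions taken as given (images as nested lists of pixel values):

```python
def ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return sum((x - y) ** 2
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def choose_reconstruction(reference, standard_rec, tv_rec):
    """Keep whichever candidate is closer to the reference in the SSD
    sense, mirroring the scheme's per-image mode decision."""
    if ssd(reference, standard_rec) <= ssd(reference, tv_rec):
        return standard_rec
    return tv_rec
```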
H.264 scalable video coding inter-layer rate control
YANG Jin SUN Yu SUN Shi-xin
Journal of Computer Applications    2011, 31 (09): 2457-2460.   DOI: 10.3724/SP.J.1087.2011.02457
An adaptive inter-layer rate control scheme was proposed for the H.264/AVC scalable extension. A switched model was put forward to predict the number of bits used for encoding an inter frame either from the previous frame of the current layer or from the current frame of the previous layer. First, a Rate-Complexity-Quantization (R-C-Q) model was extended to scalable video coding. Second, a Proportional-Integral-Derivative (PID) buffer controller was adopted to estimate the inter-frame bit budget according to the buffer state. Third, to achieve more accurate prediction when an abrupt change happens, the bit estimate was predicted from the actual bits of the current frame of the previous layer. Finally, the switched model was used to decide the bit estimate, from which the Quantization Parameter (QP) was calculated according to the R-C-Q model. The simulation results demonstrate that the proposed algorithm outperforms the JVT-W043 rate control algorithm: it provides more accurate output bit rates for each layer, maintains stable buffer fullness, reduces frame skipping and quality fluctuation, and improves the overall coding quality.
Research of constraint constant modulus medical CT image blind equalization algorithm
SUN Yunshan ZHANG Liyi DUAN Jizhong
Journal of Computer Applications    2011, 31 (06): 1575-1577.   DOI: 10.3724/SP.J.1087.2011.01575
In order to improve the Peak Signal-to-Noise Ratio (PSNR) of restored images, and to raise the reliability and restoration quality of the algorithm, the degradation and restoration process of an image was transformed, through a linear transform, into an equivalent one-dimensional convolution. A constrained constant-modulus cost function for blind equalization of medical CT images was constructed, and its convexity was proved. A blind equalization algorithm based on dimension reduction was then proposed, in which matrix inversion is avoided to improve computational efficiency. Computer simulations demonstrate the effectiveness of the algorithm.
Computation of radar cross section in high frequency region for SAR imaging simulation of ship targets
SUN Yu-Kang WANG Run-Sheng LIU Fang QI Bin
Journal of Computer Applications   
To compute the Radar Cross Section (RCS) in the high-frequency region, a method combining fast modeling with improved graphical electromagnetic computing was presented. Using collected information about the ship and modeling software, the shape of the ship was modeled accurately, and the radar cross section was computed exactly by an improved graphical electromagnetic computing algorithm that integrates the Physical Optics (PO) method and the Incremental Length Diffraction Coefficients (ILDC) method. Simulation experiments show that this near-real-time method yields good simulated SAR images.
Study on Chinese keyword extraction algorithm based on Naive Bayes model
CHENG Lan-lan,HE Pi-lian,SUN Yue-heng
Journal of Computer Applications    2005, 25 (12): 2780-2782.  
A keyword extraction algorithm for Chinese documents based on the Naïve Bayes model was proposed, involving a training process and a testing process. The parameters of the model were first obtained during training, and then the probability of a word being a keyword was computed from the model during testing. Experimental results show that the algorithm extracts more accurate keywords from a small-scale document collection than the traditional tf*idf approach. Moreover, it can flexibly incorporate additional feature items that indicate the importance of words, so it has good extensibility.
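The tf*idf baseline referred to above scores a word by its in-document frequency times its inverse document frequency; a minimal sketch (tokenization and the exact idf variant are assumptions, not the paper's definitions):

```python
import math

def tfidf_keywords(doc_tokens, corpus, k=2):
    """Rank the words of one document by tf * idf over a token corpus
    and return the top k as keywords."""
    n = len(corpus)
    scores = {}
    for w in set(doc_tokens):
        tf = doc_tokens.count(w) / len(doc_tokens)
        df = sum(w in d for d in corpus)          # document frequency
        scores[w] = tf * math.log(n / (1 + df))   # smoothed idf
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Common words such as stopwords get a low (even negative) idf and fall to the bottom of the ranking, which is exactly the weakness the Bayes model tries to improve on with richer features.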
Research on text hierarchical clustering algorithm based on K-Means
YU Jing-hui,HE Pi-lian,SUN Yue-heng
Journal of Computer Applications    2005, 25 (10): 2323-2324.  
A new text hierarchical clustering algorithm based on K-Means was presented, which combines features of both K-Means and the agglomerative approach, allowing it to reduce the early-stage errors made by the agglomerative method and hence improve the quality of the clustering solutions. The experimental evaluation shows that the algorithm leads to better solutions than agglomerative methods.
Research on the techniques of security events correlation
GAO Lei, XIAO Zheng, WEI Wei, SUN Yun-ning
Journal of Computer Applications    2005, 25 (07): 1526-1528.  

Event correlation techniques in integrated security management systems were introduced. A general architecture of the correlation engine was described, and the critical technologies and main achievements in the field were discussed. Directions of technology development, such as pattern acquisition, engine distribution and performance improvement, were analyzed and evaluated. Finally, a solution based on hierarchical rules for correlating events was presented.

RFID technology and its application in indoor positioning
SUN Yu, FAN Ping-zhi
Journal of Computer Applications    2005, 25 (05): 1205-1208.   DOI: 10.3724/SP.J.1087.2005.1205
Based on an analysis of the basic principle and characteristics of RFID, the existing RFID indoor location-sensing system LANDMARC was discussed in detail. Then, an enhanced nearest-neighbor positioning algorithm based on multi-level error processing was proposed. Simulation results show that the enhanced nearest-neighbor algorithm achieves better positioning accuracy than the existing nearest-neighbor algorithms.
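The LANDMARC family locates a tracking tag from the k reference tags whose received-signal-strength vectors are closest, weighting each neighbor by 1/E². A basic sketch of that nearest-neighbor baseline (not the paper's multi-level error processing):

```python
def landmarc_position(tracking_rss, ref_tags, k=3):
    """Estimate a tracking tag's (x, y) position.

    ref_tags: list of ((x, y), rss_vector) for reference tags at known
    positions; tracking_rss: the tracking tag's RSS vector over the
    same readers. Neighbors are weighted by 1 / E^2, where E is the
    Euclidean distance between RSS vectors."""
    dists = []
    for (x, y), rss in ref_tags:
        e = sum((a - b) ** 2 for a, b in zip(tracking_rss, rss)) ** 0.5
        dists.append((e, x, y))
    nearest = sorted(dists)[:k]
    weights = [1.0 / (e * e + 1e-9) for e, _, _ in nearest]  # avoid /0
    total = sum(weights)
    return (sum(w * x for w, (_, x, _) in zip(weights, nearest)) / total,
            sum(w * y for w, (_, _, y) in zip(weights, nearest)) / total)
```

If the tracking tag's readings coincide with one reference tag's, the estimate collapses onto that tag's position, as expected of the weighting.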
Fractal dimension fusion method based on image pyramid
SUN Yu-qiu,TIAN Jin-wen, LIU Jian
Journal of Computer Applications    2005, 25 (05): 1064-1065.   DOI: 10.3724/SP.J.1087.2005.1064
Data fusion is an important technique for detecting and recognizing targets in images, but some loss of information is unavoidable in the fusion process, so it is crucial to select a fusion algorithm that avoids losing useful information. Because images at different levels of an image pyramid are self-similar, which is the foundation of fractals, a new image fusion algorithm based on the image pyramid and fractal dimension was presented. The source images are decomposed into pyramid sequences at different scales, and the images at corresponding levels are merged with the fractal dimension as the weight. An experiment using a mid-wave and a long-wave infrared image as the two source images shows that the algorithm is feasible.